Search results for "Memory management"

Showing 10 of 15 documents

Impact of the erase algorithms on flash memory lifetime

2017

This paper presents a comparative study of the impact of the erase algorithm on flash memory lifetime, demonstrating how reducing the overall stress suffered by the memory cells increases their lifetime through smart management of erase operations. For this purpose, a fixed erase voltage, equal to the maximum value over the maximum time window, was taken as the reference test, while an algorithm with adaptive voltage levels and the same overall time window was designed and implemented for experimental comparison. The study was carried out using an innovative Automated Test Equipment, named Portable-ATE, tailored for Memory Test Chips and designed for performance e…
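The adaptive scheme the abstract contrasts with the fixed-voltage reference can be illustrated with a toy simulation. Everything here (the cell model, the voltages, the step size) is a hypothetical sketch, not the algorithm from the paper:

```python
# Toy illustration: instead of always erasing with the maximum voltage,
# start low and step up only while the cell remains unerased.
# All names, voltages and the cell model are hypothetical.

def adaptive_erase(cell_threshold, v_start=10.0, v_max=18.0, v_step=1.0):
    """Return (final_voltage, pulses) needed to erase one simulated cell.

    A cell counts as erased once the applied voltage reaches its
    threshold; lower applied voltages mean less cumulative oxide stress.
    """
    v, pulses = v_start, 0
    while v <= v_max:
        pulses += 1
        if v >= cell_threshold:   # erase-verify succeeded
            return v, pulses
        v += v_step               # raise voltage for the next pulse
    return v_max, pulses          # give up at the maximum voltage

# An easy-to-erase cell is spared the full stress; only hard cells
# ever see voltages near v_max.
easy = adaptive_erase(cell_threshold=11.0)
hard = adaptive_erase(cell_threshold=17.0)
print(easy, hard)
```

The intent of such schemes is that the average cell experiences far less than the worst-case erase voltage, which is what extends lifetime.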

Keywords: Adaptive algorithm; Computer science; Chip; Flash memory; Reduction (complexity); Automatic test equipment; Memory management; Built-in self-test; Algorithm design; Algorithm
Published in: 2017 13th Conference on Ph.D. Research in Microelectronics and Electronics (PRIME)

Online Management of Hybrid DRAM-NVMM Memory for HPC

2019

Non-volatile main memories (NVMMs) offer performance comparable to DRAM, while requiring lower static power consumption and enabling higher densities. NVMMs can therefore provide opportunities for improving both the energy efficiency and cost of main memory. Previous hybrid main-memory management approaches for HPC either do not consider the unique characteristics of NVMMs, incur high profiling costs, or require source code modifications. In this paper, we investigate HPC applications' behavior in the presence of NVMM as part of the main memory. By performing a comprehensive study of HPC applications and based on several key observations, we propose an online hybrid memory architecture for …

Keywords: Profiling (computer programming); Source code; Computer science; Non-volatile memory; Memory management; Embedded system; Memory architecture; Key (cryptography); DRAM
Published in: 2019 IEEE 26th International Conference on High Performance Computing, Data, and Analytics (HiPC)

WarpDrive: Massively Parallel Hashing on Multi-GPU Nodes

2018

Hash maps are among the most versatile data structures in computer science because of their compact data layout and expected constant time complexity for insertion and querying. However, associated memory access patterns during the probing phase are highly irregular resulting in strongly memory-bound implementations. Massively parallel accelerators such as CUDA-enabled GPUs may overcome this limitation by virtue of their fast video memory featuring almost one TB/s bandwidth in comparison to main memory modules of state-of-the-art CPUs with less than 100 GB/s. Unfortunately, the size of hash maps supported by existing single-GPU hashing implementations is restricted by the limited amount of …
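The irregular probing the abstract refers to can be seen in a minimal open-addressing hash map with linear probing. This is a single-threaded illustrative sketch, not WarpDrive's multi-GPU implementation:

```python
# Minimal open-addressing hash table with linear probing -- the probing
# scheme whose irregular memory accesses the abstract describes.
# Illustration only, not WarpDrive's implementation.

EMPTY = None

class ProbingHashMap:
    def __init__(self, capacity=8):
        self.slots = [EMPTY] * capacity

    def _probe(self, key):
        """Yield slot indices in linear-probing order for `key`."""
        start = hash(key) % len(self.slots)
        for i in range(len(self.slots)):
            yield (start + i) % len(self.slots)

    def insert(self, key, value):
        for idx in self._probe(key):
            if self.slots[idx] is EMPTY or self.slots[idx][0] == key:
                self.slots[idx] = (key, value)
                return
        raise RuntimeError("table full")

    def query(self, key):
        for idx in self._probe(key):
            if self.slots[idx] is EMPTY:
                return None          # hit an empty slot: key is absent
            if self.slots[idx][0] == key:
                return self.slots[idx][1]
        return None

m = ProbingHashMap()
m.insert("gpu", 1)
m.insert("hash", 2)
print(m.query("gpu"), m.query("missing"))
```

Each probe lands at an effectively random slot, so consecutive lookups touch unrelated memory locations; this is why such tables are memory-bound and why fast GPU video memory helps.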

Keywords: Computer science; Hash function; Parallel computing; Data structure; Hash table; Electronic mail; Memory management; Scalability; Massively parallel; Time complexity
Published in: 2018 IEEE International Parallel and Distributed Processing Symposium (IPDPS)

Hyperion

2019

Indexes are essential in data management systems to increase the speed of data retrievals. Widespread data structures to provide fast and memory-efficient indexes are prefix tries. Implementations like Judy, ART, or HOT optimize their internal alignments for cache and vector unit efficiency. While these measures usually improve the performance substantially, they can have a negative impact on memory efficiency. In this paper we present Hyperion, a trie-based main-memory key-value store achieving extreme space efficiency. In contrast to other data structures, Hyperion does not depend on CPU vector units, but scans the data structure linearly. Combined with a custom memory allocator, Hyperion…
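The prefix-trie structure the abstract builds on can be shown with a minimal sketch. Hyperion's actual node layout and custom allocator are far more space-efficient than this dictionary-of-dictionaries illustration:

```python
# Minimal prefix trie: keys sharing a prefix share nodes, which is the
# source of the space savings trie-based stores exploit.
# Illustration only, not Hyperion's data layout.

class Trie:
    def __init__(self):
        self.root = {}

    def put(self, key, value):
        node = self.root
        for ch in key:
            node = node.setdefault(ch, {})
        node["$"] = value          # "$" marks end-of-key and holds the value

    def get(self, key):
        node = self.root
        for ch in key:
            if ch not in node:
                return None
            node = node[ch]
        return node.get("$")

t = Trie()
t.put("car", 1)
t.put("cart", 2)                   # shares the "car" prefix with the first key
print(t.get("car"), t.get("cart"), t.get("ca"))
```

A lookup is a linear scan over the key's characters with no hashing, which also makes prefix and range queries natural for tries.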

Keywords: Range query (data structures); Computer science; Data structure; Search tree; Memory management; Trie; Memory footprint; Data mining; Cache
Published in: Proceedings of the 2019 International Conference on Management of Data

Concurrent Computing with Shared Replicated Memory

2019

Any concurrent system can be captured by a concurrent Abstract State Machine (cASM). This remains valid if different agents can only interact via messages. It even permits a strict separation between memory-managing agents and other agents that can access the shared memory only by sending query and update requests. This paper is dedicated to an investigation of replicated data that is maintained by a memory management subsystem, where the replication appears neither in the requests nor in the corresponding answers. We specify the behaviour of a concurrent system with such memory management using concurrent communicating ASMs (ccASMs), provide several refinements addressing different replic…

Keywords: Computer science; Distributed computing; Replication (computing); Consistency (database systems); Memory management; Shared memory; Abstract state machines; Concurrent computing

Persistent software transactional memory in Haskell

2021

Emerging persistent memory in commodity hardware allows byte-granular accesses to persistent state at memory speeds. However, to prevent inconsistent state in persistent memory due to unexpected system failures, different write-semantics are required compared to volatile memory. Transaction-based library solutions for persistent memory facilitate the atomic modification of persistent data in languages where memory is explicitly managed by the programmer, such as C/C++. For languages that provide extended capabilities like automatic memory management, a more native integration into the language is needed to maintain the high level of memory abstraction. It is shown in this paper how persiste…

Keywords: Computer science; Programming language; Runtime system; Software portability; Memory management; Software transactional memory; Haskell; Persistent data structure; Safety, Risk, Reliability and Quality; Garbage collection; Volatile memory; Software
Published in: Proceedings of the ACM on Programming Languages

Memory Resource Management for Real-Time Systems

2007

Dynamic memory storage has been widely used for years in computer science. However, its use in real-time systems has not been considered an important issue, and memory management has not received much consideration, even though today's real-time applications are often characterized by highly fluctuating memory requirements. In this paper we present an approach to dynamic memory management for real-time systems. In response to application behavior and requests, the underlying memory management system adjusts resources to meet changing demands and user needs. The architectural framework that realizes this approach allows adaptive allocation of memory resources to applications involving both per…
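The adaptive idea described above, adjusting a memory reservation to track demand rather than fixing it at startup, can be sketched as a toy pool. The growth and shrink policy and all names here are hypothetical assumptions, not the paper's design:

```python
# Toy adaptive pool: reserved capacity grows with bursts of demand and
# shrinks back when utilisation stays low, freeing memory for other tasks.
# Policy and thresholds are illustrative only.

class AdaptivePool:
    def __init__(self, capacity=16):
        self.capacity = capacity
        self.in_use = 0

    def allocate(self, n):
        if self.in_use + n > self.capacity:
            # Grow the reservation in response to rising demand.
            self.capacity = max(self.capacity * 2, self.in_use + n)
        self.in_use += n

    def release(self, n):
        self.in_use = max(0, self.in_use - n)
        # Shrink when utilisation drops below a quarter of capacity.
        if self.capacity > 16 and self.in_use < self.capacity // 4:
            self.capacity //= 2

pool = AdaptivePool()
pool.allocate(40)      # burst: capacity grows to cover the request
grown = pool.capacity
pool.release(38)       # demand drops: capacity shrinks back
print(grown, pool.capacity, pool.in_use)
```

A real-time system would additionally bound the time each of these operations may take; this sketch only shows the demand-tracking aspect.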

Keywords: Distributed shared memory; Dynamic random-access memory; Flat memory model; Computer science; Distributed computing; Real-time computing; Uniform memory access; Application software; Memory management; Resource management; Distributed memory
Published in: 19th Euromicro Conference on Real-Time Systems (ECRTS'07)

uMemristorToolbox: Open source framework to control memristors in Unity for ternary applications

2020

This paper presents uMemristorToolbox, a novel open source framework that reads and writes non-volatile ternary states to memristors. The Unity (C#) framework is a port of the open source Java project Memristor-Discovery and adds a closed-loop ternary memory controller to enable both PC and real-time embedded ternary applications. We validate the closed-loop ternary memory controller in an embedded system case study with 16 M+SDC Tungsten dopant memristors. We measure an average switching speed of 3 Hz, worst case energy usage of 1 μW per switch, 0.03% random write error and no decay in (non-volatile) state retention after 15 minutes. We conclude with observations and open questions when wo…

Keywords: Computer science; Electrical engineering; Port (circuit theory); Memristor; AC power; Memory controller; Switching time; Memory management; State (computer science); Ternary operation
Published in: 2020 IEEE 50th International Symposium on Multiple-Valued Logic (ISMVL)

Managing compressed multimedia data in a memory hierarchy: fundamental issues and basic solutions

1998

The purpose of this work is to discuss the fundamental issues and solutions in managing compressed and uncompressed multimedia data, especially voluminous continuous media types (video, audio) and text, in a memory hierarchy with four levels: main memory, magnetic disk, (optical or magnetic) on-line/near-line low-speed memory, and slow off-line memory, i.e. the archive. We view the multimedia data in such a database as being generated, (compressed), and stored into the memory hierarchy (at the lowest non-archiving level), and subsequently retrieved, (decompressed), and presented. If unused, the data either travels down the memory hierarchy or is compressed and stored at the same level. We firs…
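The aging behaviour described in the abstract (unused data either travels down the hierarchy or is compressed and kept at the same level) can be sketched as a toy state machine. The compress-first-then-migrate rule and the level names are illustrative assumptions, not the paper's policy:

```python
# Toy aging policy over a four-level memory hierarchy. An unused item is
# first compressed in place, then migrated one level down on each
# subsequent aging step. Illustration only.

LEVELS = ["main_memory", "magnetic_disk", "near_line", "archive"]

def age_out(item):
    """Apply one aging step to an item state of (level_index, compressed)."""
    level, compressed = item
    if not compressed:
        return (level, True)          # compress in place first
    if level + 1 < len(LEVELS):
        return (level + 1, True)      # then migrate one level down
    return (level, True)              # already archived: stay put

state = (0, False)                    # uncompressed, in main memory
for _ in range(3):                    # three aging steps without access
    state = age_out(state)
print(LEVELS[state[0]], state[1])
```

Any access would reset the item's position in a real policy; this sketch only shows the downward drift of unused data.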

Keywords: Flat memory model; Theoretical computer science; Multimedia; Memory hierarchy; Computer science; Thrashing; Memory map; Memory management; Physical address; Virtual memory; Interleaved memory
Published in: SPIE Proceedings

Improving Collective I/O Performance Using Non-volatile Memory Devices

2016

Collective I/O is a parallel I/O technique designed to deliver high performance data access to scientific applications running on high-end computing clusters. In collective I/O, write performance is highly dependent upon the storage system response time and limited by the slowest writer. The storage system response time in conjunction with the need for global synchronisation, required during every round of data exchange and write, severely impacts collective I/O performance. Future Exascale systems will have an increasing number of processor cores, while the number of storage servers will remain relatively small. Therefore, the storage system concurrency level will further increase, worseni…

Keywords: Input/output; File system; Multi-core processor; Computer science; Concurrency; Supercomputer; Non-volatile memory; Memory management; Data access; Server; Computer data storage; Computer network
Published in: 2016 IEEE International Conference on Cluster Computing (CLUSTER)